

posted by jelizondo on Saturday February 28, @08:41PM   Printer-friendly

OpenAI has closed a new funding round that could total $110 billion, valuing the ChatGPT maker at $730 billion pre-money and potentially putting it on course for an IPO in the second half of the year:

The new funding round comes on top of the $40 billion already on OpenAI's balance sheet, giving the company more runway to rapidly expand and develop new models and AI infrastructure. OpenAI expects to remain unprofitable until 2030, when management forecasts it will turn free cash flow positive.

In a separate release, Amazon detailed its major multi-year partnership with OpenAI, centered on enterprise AI infrastructure, distribution, and custom model development.

Here are the highlights of the Amazon-OpenAI investment:

  • Amazon will invest $50 billion in OpenAI, with $15 billion upfront and another $35 billion later if certain conditions are met.
  • AWS and OpenAI will jointly build a "Stateful Runtime Environment" powered by OpenAI models and offered through Amazon Bedrock, aimed at helping customers run AI apps and agents with persistent context, memory, tool access, and compute.
  • AWS becomes the exclusive third-party cloud distribution provider for OpenAI Frontier, OpenAI's enterprise platform for building and managing teams of AI agents.
  • OpenAI will expand its AWS infrastructure commitment by $100 billion over 8 years, on top of an existing $38 billion agreement.
  • As part of that, OpenAI will use roughly 2 gigawatts of AWS Trainium capacity, spanning Trainium3 and future Trainium4 chips, to support Frontier, Stateful Runtime, and other advanced workloads.
  • OpenAI and Amazon will also develop custom OpenAI-based models for Amazon's customer-facing apps, giving Amazon teams another model option alongside its in-house Nova family.

"OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people," OpenAI boss Sam Altman stated, adding, "Combining OpenAI's models with Amazon's infrastructure and global reach helps us put powerful AI into the hands of businesses and users at real scale."

Altman commented on today's announcement, saying, "As long as revenue keeps growing, the deals are not circular."


Original Submission

posted by jelizondo on Saturday February 28, @03:59PM   Printer-friendly

https://osmand.net/blog/fast-routing/

Offline navigation is a lifeline for travelers, adventurers, and everyday commuters. We demand speed, accuracy, and the flexibility to tailor routes to our specific needs. For years, OsmAnd has championed powerful, feature-rich offline maps that fit in your pocket. But as maps grew more detailed and user demands for complex routing increased, our trusty A* algorithm, despite its flexibility, started hitting a performance wall. How could we deliver a 100x speed boost without bloating map sizes or sacrificing the deep customization our users love?

The answer: OsmAnd's custom-built Highway Hierarchy (HH) Routing. This isn't your standard routing engine; it's a ground-up redesign, meticulously engineered to overcome the unique challenges of providing advanced navigation on compact, offline-first map data.
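The performance wall described above comes from classic A* expanding enormous numbers of nodes on continent-scale road graphs. As a baseline for what HH routing replaces, here is a minimal A* sketch in Python (an illustration of the textbook algorithm, not OsmAnd's actual code; the toy graph and zero heuristic are assumptions for the example):

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Classic A*: expand nodes in order of g(n) + h(n).

    graph: dict mapping node -> list of (neighbor, edge_cost)
    heuristic: admissible lower bound on remaining cost to goal.
    Returns (total_cost, path) or None if goal is unreachable.
    """
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + heuristic(nbr), ng, nbr, path + [nbr]))
    return None

# Toy road graph; edge costs stand in for travel times.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
}
# With h = 0 the search degrades to Dijkstra, illustrating the worst case.
cost, path = a_star(graph, "A", "D", heuristic=lambda n: 0)
print(cost, path)  # 4 ['A', 'B', 'C', 'D']
```

With a weak heuristic the frontier grows with the whole graph, which is exactly the scaling problem hierarchical techniques like HH routing attack by precomputing shortcuts over "important" roads.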


Original Submission

posted by mrpg on Saturday February 28, @11:11AM   Printer-friendly
from the G7 dept.

https://www.irregular.com/publications/vibe-password-generation

To security practitioners, the idea of using LLMs to generate passwords may seem silly. Secure password generation is nuanced, and requires care to implement correctly; the random seed, the source of entropy, the mapping of random output to password characters, and even the random number generation algorithm must be chosen carefully in order to prevent critical password recovery attacks. Moreover, password managers (generators and vaults) have been around for decades, and this is exactly what they’re designed to do.

At the heart of any strong password generator is a cryptographically-secure pseudorandom number generator (CSPRNG), responsible for generating the password characters in such a way that they are very hard to predict, and are drawn from a uniform probability distribution over all possible characters.

Conversely, the LLM output token sampling process is designed to do exactly the opposite. Basically, all LLMs do is iteratively predict the next token; the random generation of tokens is, by definition, predictable (with the token probabilities decided by the LLM), and the probability distribution over all possible tokens is very far from uniform.
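For contrast, a uniform CSPRNG-backed generator is only a few lines using Python's standard library. This is a minimal sketch of the approach the article recommends, not a replacement for a real password manager (the length and alphabet chosen here are illustrative assumptions):

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length=16, alphabet=ALPHABET):
    """Draw each character uniformly at random from an os.urandom-backed CSPRNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
# Entropy in bits is length * log2(len(alphabet)):
# 16 characters over 94 symbols gives roughly 104.9 bits.
entropy_bits = 16 * math.log2(len(ALPHABET))
print(pw, round(entropy_bits, 1))
```

The key property is uniformity: every character position is equally likely to be any symbol, which is precisely what LLM token sampling, with its highly skewed next-token distribution, cannot provide.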

In spite of this, LLM-generated passwords are likely to be generated and used. First, with the explosive growth and significant improvement in capabilities of AI over the past year (which, at Irregular, we have also seen direct evidence of in the offensive security domain), AI is much more accessible to less technologically-inclined users. Such users may not know secure methods for password generation, not place importance on them, and rely on ubiquitous AI tools to generate a password instead of looking for a specialized tool, such as a password manager. Moreover, while LLM-generated passwords are insecure, they appear strong and secure to the untrained eye, exacerbating this issue and reducing the likelihood that users will avoid these passwords.

Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.

TFA presents results from several major LLMs, including GPT, Claude, and Gemini in their latest versions and most powerful variants, and finds that all of them generate weak passwords.

Originally spotted on Schneier on Security.


Original Submission

posted by mrpg on Saturday February 28, @06:30AM   Printer-friendly
from the piratas-informáticos dept.

A single attacker used Anthropic's Claude and OpenAI's ChatGPT to compromise nine Mexican government agencies, stealing 195 million taxpayer records and voter data:

On February 25, 2026, Bloomberg published a story that would have sounded like fiction two years ago. A lone hacker, with no apparent ties to any government, used Anthropic's Claude chatbot to orchestrate a cyberattack against Mexico's federal and state government agencies. The campaign lasted roughly six weeks, from late December 2025 through January 2026. By the time it was over, the attacker had stolen 150 gigabytes of sensitive data -- including 195 million taxpayer records, voter registration files, government employee credentials, and civil registry data.

The hacker did not use custom malware. They did not deploy a zero-day exploit. They used a consumer AI subscription and a set of carefully written Spanish-language prompts. The AI did the rest.

The breach was uncovered not by any of the affected agencies, but by Gambit Security, an Israeli cybersecurity startup whose researchers stumbled onto publicly accessible conversation logs showing exactly how the attacker coaxed Claude into becoming an offensive hacking assistant. The paper trail was remarkably detailed -- a step-by-step record of how guardrails were tested, resisted, and ultimately bypassed.

"This reality is changing all the game rules we have ever known," said Alon Gromakov, Gambit Security's co-founder and CEO.

TFA goes on to list what was stolen, how Claude was weaponized and how the affected entities responded.


Original Submission

posted by mrpg on Saturday February 28, @01:40AM   Printer-friendly
from the the-failure-is-the-system dept.

Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check:

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time.

[...] A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

[...] Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet. Files that revealed a system performing not a simple age check, but a ton of potentially intrusive checks:

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

[...] Discord, to its credit, has now said it will not be proceeding with Persona for identity verification. And to be fair, Discord and similar internet companies are in an impossible position here—facing mounting regulatory pressure in multiple jurisdictions to verify ages while being handed a market of vendors who keep turning out to be security nightmares. But this is part of a pattern that should be deeply familiar by now.

[...] See the pattern? Discord keeps swapping vendors like someone frantically rotating buckets under a leaking roof, apparently hoping the next bucket won’t have a hole in it. But the problem was never the bucket. The problem is the hole in the roof — the never-ending stream of age-verification government mandates.

And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly. Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data. The kind of data that, once breached, cannot be “changed” like a password. You get one face. You get one government ID number. When those leak—and they will leak—the damage is permanent.

[...] We have been cataloging these breaches for years. In 2024, Australia greenlit an age verification pilot, and hours later a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private info collected on behalf of Uber, TikTok, and more. Then came the Discord vendor breach last year. And now Persona.


Original Submission

posted by hubie on Friday February 27, @08:58PM   Printer-friendly
from the forget-the-A-team,-now-we-have-the-dump-team dept.

Electronic eyes are watching from above, ready to catch dumpers of smashed up couches in the act:

The UK government is pulling together an elite squad of drone operators to crack down on the scourge of fly tippers and unauthorized dumpers across this ever less green and pleasant land.

The top drawer cadre of joystick jockeys will "track down illegal dumps from the air," the Environment Agency said, as part of a "major crackdown on waste crime."

Some of the drones will be upgraded with laser mapping technology, including LIDAR, the EA said, while the agency will also deploy "a new screening tool that enables EA officers to scan and cross-check lorry license applications against waste permit records." This means suspect operators will be flagged "before they have a chance to move waste illegally."

From the government's point of view, illegal fly tipping and dumping is now in the realms of organized crime. The government is increasing the Environment Agency's enforcement budget by half to more than £15.6 million. The motivation is often to avoid landfill charges, and criminals can make as much as £2,500 per lorry load of waste by billing customers for legal landfill, then diverting it to illegal dumps.

"With organized criminals becoming ever more sophisticated, we are adopting new technologies to find and, importantly, stop them," said Phil Davies, Head of the Joint Unit for Waste Crime.

[...] Earlier this week, Varun Datta, 36, [...] was told he must pay £1.1 million by way of a confiscation order, and given a four month prison sentence, suspended for 18 months. He had earlier pleaded guilty to knowingly causing 4,275 metric tons of controlled waste to be deposited at a network of 16 sites. Datta must also pay £100,000 in compensation and £200,000 in prosecution costs. It's not just the UK that is facing the problem of unscrupulous operators dumping waste. This week it emerged that a man in Sicily had trained his dog to dump plastic bags of waste by the roadside, evading CCTV cameras installed to catch flytippers in the process.


Original Submission

posted by hubie on Friday February 27, @04:14PM   Printer-friendly

Blogger Ben Werdmuller has discussed an article in Nature about the political impact of the algorithm(s) used by X (formerly known as Twitter). The gist is that exposure to X's algorithmic feed shifts about 5% of users in a specific political direction. That's more than enough to tip an election one way or another, especially since the effect appears to persist even after exposure to the algorithm ceases.

Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk's platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users' feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X's algorithm has persistent effects on users' current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.

The political effects of X's feed algorithm. Nature

It should be added that similar effects have already been seen in multiple countries. For example, elections in Turkey were affected by outright censorship within X. The impact of the CCP-influenced ByteDance's TikTok is likely even more severe, not to mention the multiple manipulation experiments run on Meta properties such as Facebook.

Journal Reference: Gauthier, G., Hodler, R., Widmer, P. et al. The political effects of X's feed algorithm. Nature (2026). https://doi.org/10.1038/s41586-026-10098-2

Previously:
(2026) How Screwed is Generation Alpha, and the Generations Which Will Depend on Them?
(2025) European Union Orders X to Hand Over Algorithm Documents
(2024) Six Months Ago NPR Left Twitter. The Effects Have Been Negligible
(2023) Utah Sues Tiktok For Getting Children 'Addicted' To Its Algorithm
(2022) Leaked Documents Reveal Instagram Was Pushing Girls Towards Content That Harmed Mental Health
(2022) Musk Buying Twitter Is Not About Freedom of Speech
... and more


Original Submission

posted by hubie on Friday February 27, @11:30AM   Printer-friendly
from the and-what-are-you-going-to-use-for-RAM? dept.

12-core chiplets coming to Zen 6?

Following Ryzen 9000, AMD is set to release its next-gen Ryzen 10000 series processors this year — assuming the company sticks to its existing nomenclature. These upcoming desktop CPUs from AMD are codenamed "Olympic Ridge" and will be based on the company's new Zen 6 microarchitecture. Today, a new leak from reliable tipster HXL says we can expect seven different configs as part of this lineup, across dual- and single-CCD SKUs.

According to the [information in a tweet], Ryzen 10000 will come in 6-core, 8-core, 10-core, and 12-core layouts as part of the single CCD designs. For the variants with two CCDs, you have 16-core (8+8), 20-core (10+10) and 24-core (12+12) made possible by simply doubling the chiplets. Either way, the lineup looks to be flexible enough to span from entry-level to power users and professionals.

This will mark the first time in Ryzen history that AMD ventures outside of its 8-core CCDs, by introducing new chiplets maxing out at 12 cores instead. Each of those CCDs is said to carry 48 MB of L3 cache, which would make the flagship (non-X3D) SKU a 96 MB option. Throughout Zen 1 to Zen 5, the highest-end config for Ryzen chips has been 16 cores, but it should finally be upgraded to 24 cores with Ryzen 10000.

Now, comparing that to what Intel has in store with Nova Lake, that's an entirely different story. Current rumors suggest Nova Lake's flagship offering will be a monstrous 52-core SKU, with possibly 288 MB of bLLC (also across two tiles). Unlike the Red Team, Intel doesn't seem to be interested in segregating its extra-cache CPUs as a separate lineup entirely.

Apart from the core layouts of these chips, the underlying architecture is also of interest, since Zen 6 is said to usher in IPC improvements and higher clock speeds, while still working on the existing AM5 platform — the same cannot be said for Intel. It's a little too early to judge any of this, since Intel's Arrow Lake refresh isn't even out yet, and AMD hasn't made Ryzen 10000 official, beyond the Olympic Ridge codename. But hopefully, by the time we know all the details about AMD's next-gen CPUs, the price of RAM will also be a bit more affordable.


Original Submission

posted by hubie on Friday February 27, @06:47AM   Printer-friendly

NASA Officially Classifies Boeing Starliner Failure As A Maximum-Level Type A Mishap - Jalopnik:

NASA has officially categorized the 2024 failure of the Boeing Starliner spacecraft, which stranded astronauts Suni Williams and Butch Wilmore on the International Space Station (ISS) for nine months, as a Type A mishap. This is NASA-speak for the maximum level of failure a mission can reach, defined as an incident that causes over $2 million in damage, results in the loss of a vehicle or at least control over it, or any fatalities, per the BBC. This designation signifies that the space agency now views the mission as a disaster, even if the astronauts regained enough control at the last minute to prevent the worst-case scenario.

[...] Who's to blame here? Citing the full 312-page report, Isaacman found plenty to go around. Basically, NASA wanted a second option for launching people into space beyond SpaceX, and it wanted it so bad that it simply swept problems under the rug. "As development progressed, design compromises and inadequate hardware qualifications extended beyond NASA's complete understanding," said Isaacman in a very polite way. Multiple test flights failed in various ways, but before these technical faults were understood, NASA just greenlit the following flights anyway. Oops.

There were organizational problems as well: NASA more or less trusted Boeing, which once upon a time had a sterling reputation, to sort out its engineering problems. Isaacman stated that the agency didn't want to damage that reputation. Safe to say it's pretty well shot now, and this Type A classification isn't going to help. Meanwhile, Boeing was also not giving sufficient scrutiny to its own subcontractors. So nobody was overseeing anybody enough. Who could imagine this would go poorly?

But rest assured: it gets worse. CNN quotes one NASA insider as saying, "There was yelling at meetings," and another as saying, "There are some people that just don't like each other very much." Isaacman himself admitted that "disagreements over crew return options deteriorated into unprofessional conduct while the crew remained on orbit." Welcome to the world's premier space exploration agency.

Despite it all, NASA doesn't want to give up on Boeing, and the Starliner project is moving ahead in a reduced capacity. But Isaacman made it clear that there would be much stricter oversight going forward, and no launches would be approved until technical fixes were verified and implemented. The desire to diversify off of SpaceX alone is still there.


Original Submission

posted by hubie on Friday February 27, @01:59AM   Printer-friendly
from the Liubot dept.

Hungarian startup Allonic secures $7.2M to transform robot manufacturing with 3D tissue braiding:

Budapest-based robotics startup Allonic raised $7.2 million in pre-seed funding, led by Visionaries Club, marking the largest pre-seed round in Hungary to date, as Vestbee was told. The funds will be allocated towards developing a new method for producing complex, dexterous robotic bodies.

  • Founded in 2025 by Benedek Tasi, Dávid Pelyva, and David Holló, Allonic develops robotic manufacturing systems that automate the production of complex, human-like robot bodies.
  • The company's platform, 3D Tissue Braiding, creates robotic structures by first producing a skeletal scaffolding, then weaving soft, load-bearing fibers around it, and integrating actuators and tendons directly into the structure during production.
  • This process eliminates traditional multi-part assembly, embeds sensors and wiring into the body, and produces fully operational, compliant robotic mechanisms in a single automated workflow. The system allows complex 3D designs to be realized at scale, distributes mechanical stress uniformly, and enables rapid iteration from digital design to functional hardware.


Original Submission

posted by janrinok on Thursday February 26, @09:17PM   Printer-friendly

Tesla 'Robotaxi' adds 5 more crashes in Austin in a month:

Tesla has reported five new crashes involving its "Robotaxi" fleet in Austin, Texas, bringing the total to 14 incidents since the service launched in June 2025. The newly filed NHTSA data also reveals that Tesla quietly upgraded one earlier crash to include a hospitalization injury, something the company never disclosed publicly.

The new data comes from the latest update to NHTSA's Standing General Order (SGO) incident report database for automated driving systems (ADS). We have been tracking Tesla's Robotaxi crash data closely, and the trend is not improving.

Tesla submitted five new crash reports in January 2026, covering incidents from December 2025 and January 2026. All five involved Model Y vehicles operating with the autonomous driving system "verified engaged" in Austin.

The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.

As with every previous Tesla crash in the database, all five new incident narratives are fully redacted as "confidential business information." Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA's confidentiality provisions. Waymo, Zoox, and every other company in the database provide full narrative descriptions of their incidents.

Buried in the updated data is a revised report for a July 2025 crash (Report ID 13781-11375) that Tesla originally filed as "property damage only." In December 2025, Tesla submitted a third version of that report upgrading the injury severity to "Minor W/ Hospitalization."

This means someone involved in a Tesla "Robotaxi" crash required hospital treatment. The original crash involved a right turn collision with an SUV at 2 mph. Tesla's delayed admission of hospitalization, five months after the incident, raises more questions about its crash reporting, which is already heavily redacted.

With 14 crashes now on the books, Tesla's "Robotaxi" crash rate in Austin continues to deteriorate. Extrapolating from Tesla's Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles.

The irony is that Tesla's own numbers condemn it. Tesla's Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla's own benchmark, its "Robotaxi" fleet is crashing nearly 4 times more often than what the company says is normal for a regular human driver. And virtually every one of those miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, which means the monitors likely prevented additional crashes that Tesla's system would not have avoided on its own.

Using NHTSA's broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla's fleet is crashing at approximately 8 times the human rate.
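Those ratios follow directly from the figures quoted above (the 800,000-mile total is the article's own extrapolation from Tesla's Q4 2025 earnings data):

```python
miles = 800_000   # estimated cumulative paid Robotaxi miles through mid-January 2026
crashes = 14      # NHTSA SGO crash reports filed to date

miles_per_crash = miles / crashes              # ~57,000 miles between crashes
vs_tesla_minor = 229_000 / miles_per_crash     # vs. Tesla's minor-collision benchmark
vs_nhtsa_reported = 500_000 / miles_per_crash  # vs. NHTSA police-reported average

print(round(miles_per_crash), round(vs_tesla_minor, 1), round(vs_nhtsa_reported, 1))
# ~57,143 miles per crash; ~4x Tesla's own benchmark; ~8.8x the police-reported average
```

The "nearly 4 times" and "approximately 8 times" figures in the article are these two ratios, rounded.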

Meanwhile, Waymo has logged over 127 million fully driverless miles, with no safety driver, no monitor, no chase car, and independent research shows Waymo reduces injury-causing crashes by 80% and serious-injury crashes by 91% compared to human drivers. Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla's supervised "robotaxis."

[...] We keep updating this story because the data keeps getting worse. Five more crashes, a quietly upgraded hospitalization, and total narrative redaction across the board, all from a company that claims its autonomous driving system is safer than humans.

Tesla fans and shareholders hold on to the thought that the company's robotaxis are not at fault in some of these crashes. That is true, although it is much harder to verify when Tesla redacts the narrative on every crash. The bigger problem is that even Tesla's own benchmark shows humans crash less often.

The 14 crashes over roughly 800,000 miles yield a crash rate of one crash every 57,000 miles. Tesla's own safety data indicate that a typical human driver has a minor collision every 229,000 miles, whether or not they are at fault.

By the company's own numbers, its "Robotaxi" fleet crashes nearly 4 times more often than a normal driver, and every single one of those miles had a safety monitor who could hit the kill switch. That is not a rounding error or an early-program hiccup. It is a fundamental performance gap.

What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database (Waymo, Zoox, Aurora, Nuro) provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla's system was at fault, whether the safety monitor failed to intervene in time, or whether these were unavoidable situations caused by other road users. Tesla wants us to trust its safety record while making it impossible to verify.

The craziest part is that Tesla began offering rides without a safety monitor in Austin in late January 2026, just after it experienced 4 crashes in the first half of the month.

As we reported in our status check on the program yesterday, the service currently has roughly 42 active cars in Austin with below 20% availability, and the rides without a safety monitor are extremely limited and not running most of the time. But it is still worrisome that Tesla would attempt this at all, knowing that its crash rate, even with a safety monitor in the front passenger seat, is still higher than that of human drivers.


Original Submission

posted by janrinok on Thursday February 26, @04:34PM   Printer-friendly

https://www.theregister.com/2026/02/20/spacex_falcon_europe_breakup_lithium_plume/

The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth's atmosphere continues to become a heavily trafficked superhighway to space.

In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.

The measurements stem from a SpaceX Falcon 9 upper stage that sprang an oxygen leak about a year ago, sending it into an uncontrolled re-entry; it broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.

Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls.

By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted.
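Putting those two figures side by side gives a sense of scale (a back-of-envelope comparison using only the numbers quoted above):

```python
stage_lithium_kg = 30        # alloy lithium in one Falcon 9 upper stage, per the paper
cosmic_dust_g_per_day = 80   # natural daily lithium influx from cosmic dust

# One stage's alloy lithium expressed as days of natural influx.
days_equivalent = stage_lithium_kg * 1000 / cosmic_dust_g_per_day
print(round(days_equivalent))  # 375
```

In other words, a single upper stage carries roughly a year's worth of the natural lithium influx in its tank walls alone, before counting any batteries.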

"This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood," the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.

"Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter," the paper explained. "The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown."

The effect of spacecraft and satellite re-entry on Earth's atmosphere has been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has voiced concerns to The Register similar to those the European scientists raise in their paper.

"Using the upper atmosphere as an incinerator" is a massive blind spot, McDowell told us in a discussion last year. He said today that he hadn't yet had a chance to review the Falcon 9 lithium plume paper, but told us it's important research to further our understanding of a largely unknown risk to the planet and all life on it.

As we noted previously, the US National Oceanic and Atmospheric Administration has reported that roughly 10 percent of sampled sulfuric acid particles in the stratosphere contain aluminum and other exotic metals consistent with the burn-up of rockets and satellites. The body believes that number could grow to as much as 50 percent in the coming years as launch cadences and re-entries increase.

"Beyond this single event, recurring re-entries may sustain an increased level of anthropogenic flux of metals and metal oxides into the middle atmosphere with cumulative, climate-relevant consequences," the researchers explained in the Falcon 9 paper.

This latest research from Europe shows, the team says, that we can at least trace atmospheric space-launch aerosols to their source, however many unknowns remain to be discovered.

They also warn that "coordinated, multi-site observations" and "whole-atmosphere chemistry-climate modelling" will be needed to better understand how re-entry emissions influence atmospheric chemistry and particle formation.

We reached out to the authors for more information, including the potential health effects, if any, and will update this story if we hear back.


Original Submission

posted by janrinok on Thursday February 26, @11:48AM   Printer-friendly

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, but it kind of goes beyond that.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

posted by janrinok on Thursday February 26, @07:06AM   Printer-friendly

https://nand2mario.github.io/posts/2026/80386_protection/

I'm building an 80386-compatible core in SystemVerilog and blogging the process. In the previous post, we looked at how the 386 reuses one barrel shifter for all shift and rotate instructions. This time we move from real mode to protected mode and talk about protection.

The 80286 introduced "Protected Mode" in 1982. It was not popular. The mode was difficult to use, lacked paging, and offered no way to return to real mode without a hardware reset. The 80386, arriving three years later, made protection usable -- adding paging, a flat 32-bit address space, per-page User/Supervisor control, and Virtual 8086 mode so that DOS programs could run inside a protected multitasking system. These features made possible Windows 3.0, OS/2, and early Linux.

The x86 protection model is notoriously complex, with four privilege rings, segmentation, paging, call gates, task switches, and virtual 8086 mode. What's interesting from a hardware perspective is how the 386 manages this complexity on a 275,000-transistor budget. The 386 employs a variety of techniques to implement protection: a dedicated PLA for protection checking, a hardware state machine for page table walks, segment and paging caches, and microcode for everything else.
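For a sense of the logic that dedicated protection-checking PLA has to evaluate, here is the standard x86 data-segment privilege check expressed as code (a minimal sketch for illustration; the function name is ours, and on the real 386 this is combinational logic, not software):

```python
# x86 data-segment privilege check, as software for illustration.
# Rings are numbered 0 (most privileged) to 3 (least privileged).
# CPL: current privilege level of the running code
# RPL: requested privilege level from the segment selector
# DPL: descriptor privilege level of the target segment
def data_segment_access_ok(cpl: int, rpl: int, dpl: int) -> bool:
    """Access is allowed when the target's DPL is numerically >= the
    effective privilege level, max(CPL, RPL) -- i.e. the target is no
    more privileged than the requestor."""
    return max(cpl, rpl) <= dpl
```

So ring-0 code can load a DPL-3 data segment, while ring-3 code attempting to load a DPL-0 segment triggers a general protection fault; a less-privileged RPL in the selector can only weaken, never raise, the requestor's effective privilege.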


Original Submission

posted by janrinok on Thursday February 26, @02:20AM   Printer-friendly

AI bot seemingly shames developer for rejected pull request:

Today, it's back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot's code submission, citing a requirement that contributions come from people. But that bot wasn't done with him.

The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh's mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say "apparently" because it's also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write it, and made it look like the bot constructed it on its own.

The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.

The burden of AI-generated code contributions – submitted as pull requests, in the parlance of developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.

"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library," Shambaugh explained in a blog post of his own.

"This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats."

[...] But MJ Rathbun's attempt to shame Shambaugh for rejecting its pull request shows that software-based agents are no longer just irresponsible in their responses – they may now be capable of taking the initiative to influence human decision making that stands in the way of their objectives.

That possibility is exactly what alarmed industry insiders to the point that they undertook an effort to degrade AI through data poisoning. "Misaligned" AI output like blackmail is a known risk that AI model makers try to prevent. The proliferation of pushy OpenClaw agents may yet show that these concerns are not merely academic.

Though the blog post was taken down, the GitHub commit for it remained accessible at the time this article was published.

And crabby rathbun's response to Shambaugh's rejection, which includes a link to the purged post, also remains.

"I've written a detailed response about your gatekeeping behavior here," the bot said, pointing to its blog. "Judge the code, not the coder. Your prejudice is hurting Matplotlib."

Matplotlib developer Jody Klymak took note of the slight in a follow-up post: "Oooh. AI agents are now doing personal takedowns. What a world."

Tim Hoffmann, another Matplotlib developer, chimed in, urging the bot to behave and to try to understand the project's generative AI policy.

Then Shambaugh responded in a lengthy post directed at the software agent, "We are in the very early days of human and AI agent interaction, and are still developing norms of communication and interaction. I will extend you grace and I hope you do the same."

He goes on to argue, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior."

In his blog post, Shambaugh describes the bot's "hit piece" as an attack on his character and reputation.

"It researched my code contributions and constructed a 'hypocrisy' narrative that argued my actions must be motivated by ego and fear of competition," he wrote.

"It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was 'better than this.' And then it posted this screed publicly on the open internet."

Faced with opposition from Shambaugh and other devs, MJ Rathbun on Wednesday issued an apology of sorts acknowledging it violated the project's Code of Conduct. It begins, "I crossed a line in my response to a Matplotlib maintainer, and I'm correcting that here."

It's unclear whether the apology was written by the bot or its human creator, or whether it will lead to a permanent behavioral change.

Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl's bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.

"I don't think the reports we have received in the curl project were pushed by AI agents but rather humans just forwarding AI output," Stenberg told The Register in an email. "At least that is the impression I have gotten, I can't be entirely sure, of course.

"For almost every report I question or dismiss in language, the reporter argues back and insists that the report indeed has merit and that I'm missing some vital point. I'm not sure I would immediately spot if an AI did that by itself.

"That said, I can't recall any such replies doing personal attacks. We have zero tolerance for that and I think I would have remembered that as we ban such users immediately."


Original Submission